The first chapter we're talking about today is complexity analysis in artificial intelligence.
Just to gauge the room: who of you has dealt with some sort of complexity analysis
before today? That's like a solid two-thirds. For you this will probably be a
recap. For the rest of you this is entirely new stuff. So if you have an
algorithm you usually have something associated with the algorithm that is
called the complexity of the algorithm and we need to deal with this for
several reasons. For example, we still live in the age of Moore's law, which
means computing power per dollar still usually doubles every two years-ish, and
that's often cited as the reason why, with this continuing exponential curve, at
some point we will have artificial general intelligence, because there's just so
much computing power available. That's a bit fallacious under some circumstances,
I'd say. For example, it presupposes that our algorithms are reasonably
efficient, that we have an algorithm whose runtime doesn't grow even faster than
the computing power per dollar, per CPU, whatever you have. So we need to find some way of coping with how
complex an algorithm is. And that's usually a measure of the runtime of an
algorithm in relation to the input size. So if I give you a problem of size n,
whatever that size may be: it might be the number of digits in a number, it
may be the length of a list, it might be the size of a
database. For some input to your algorithm, you have some sort of time the algorithm
takes to finish whatever it is it's doing, and then we measure the time it
took related to the size of the input in some sort of complexity measure.
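A minimal sketch of that measuring idea in Python (just an illustration, not something from the lecture; the algorithm being timed and the input sizes are placeholders I made up) could look like this:

```python
import time

def algorithm(data):
    # placeholder: stands in for whatever the algorithm actually does
    return sum(data)

for n in (10, 100, 1_000, 10_000):
    data = list(range(n))                  # an input of size n
    start = time.perf_counter()
    algorithm(data)
    elapsed = time.perf_counter() - start
    print(f"n = {n:>6}: {elapsed * 1e6:.1f} microseconds")
```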
Here's three examples. We have a linear algorithm that takes 100 microseconds
times n, where n is any size of the input, length of a list, number of digits
in a number and whatever. Then we have a quadratic algorithm that
takes 7 n squared microseconds, and we have an exponential algorithm that takes
2 to the n microseconds. Up until here, they are basically the same in the
sense that you, executing the algorithm from your terminal, would probably not
notice the difference. Who here would think that they have the capability to
tell the difference between, like, 175 microseconds and 1 millisecond? You might, but
probably not. And even if you do notice the difference, it's probably not going
to matter anyway. But then, so these columns are for one input, for five inputs,
for ten inputs, and then for 45 inputs, and that's where it gets really
interesting, because the linear algorithm takes 4.5 milliseconds, again basically
instantly. The quadratic algorithm takes about 14 milliseconds, still basically
instantly, and the exponential algorithm that took 1 millisecond for just 10
inputs now suddenly takes over a year. Because of course, if you
plot time versus n, you have a linear one, maybe a quadratic one, and you have
an exponential one that's just growing ridiculously fast. And for small input
sizes, there's basically no difference between the three of them; there might
even be an advantage to the algorithm with a higher complexity. You can see,
for example, that in the case of just one input, the linear algorithm takes 100
microseconds, but the quadratic one takes only 7 microseconds. That's way less,
but in the end, for large inputs, the linear one will have the shorter runtime.
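Those numbers can be reproduced with a few lines of arithmetic; here is a small sketch reconstructing the table, assuming the stated costs of 100 n, 7 n squared and 2 to the n microseconds:

```python
def readable(microseconds):
    # convert a duration in microseconds into a more readable unit
    seconds = microseconds / 1e6
    if seconds < 1:
        return f"{microseconds:,.0f} microseconds"
    if seconds < 86_400 * 365:
        return f"{seconds:,.1f} seconds"
    return f"{seconds / (86_400 * 365):,.1f} years"

for n in (1, 5, 10, 45):
    linear      = 100 * n      # 100 n microseconds
    quadratic   = 7 * n ** 2   # 7 n^2 microseconds
    exponential = 2 ** n       # 2^n microseconds
    print(f"n = {n:>2}: "
          f"linear {readable(linear)}, "
          f"quadratic {readable(quadratic)}, "
          f"exponential {readable(exponential)}")
```

Running this shows the exponential algorithm jumping from roughly a millisecond at n = 10 to over a year at n = 45.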
Yeah, what? One year? Yes. So 2 to the 10 is 1024 microseconds, that's roughly a
millisecond; I think the slides are cut off at some point, oh well. Okay, but 2
to the 45 is already about 3.5 times 10 to the 13 microseconds, and if you
convert that, you arrive at about 1.1 years. And if you extend the table to
reasonable sizes of data input, like what you would have if you want to analyze,
say, 10,000 …
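Spelled out, the arithmetic behind that one year is roughly the following (my own reconstruction of the calculation, not taken from the slides):

$$
2^{45}\,\mu\mathrm{s} \approx 3.5 \times 10^{13}\,\mu\mathrm{s} = 3.5 \times 10^{7}\,\mathrm{s} \approx 407\ \text{days} \approx 1.1\ \text{years}
$$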